
    Why anthropic reasoning cannot predict Lambda

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that, in the absence of a fundamental motivation for selecting one weighting scheme over another, the anthropic principle cannot be used to explain the value of Lambda, nor, likely, that of any other physical parameter. Comment: 4 pages, 1 figure. Discussion slightly expanded, refs added, conclusions unchanged. Matches published version.

    Monolithic or hierarchical star formation? A new statistical analysis

    We consider an analytic model of cosmic star formation which incorporates supernova feedback, gas accretion and enriched outflows, reproducing the history of cosmic star formation, metallicity, type II supernova rates and the fraction of baryons allocated to structures. We present a new statistical treatment of the available observational data on the star formation rate and metallicity that accounts for the presence of possible systematics. We then employ a Bayesian Markov Chain Monte Carlo method to compare the predictions of our model with observations and derive constraints on the seven free parameters of the model. We find that the dust correction scheme one chooses to adopt for the star formation data is critical in determining which scenario is favoured between a hierarchical star formation model, where star formation is prolonged by accretion, infall and merging, and a monolithic scenario, where star formation is rapid and efficient. We distinguish between these modes by defining a characteristic minimum mass, M > 10^{11} M_solar in our fiducial model, for early-type galaxies in which star formation occurs efficiently. Our results indicate that the hierarchical star formation model can achieve better agreement with the data, but this requires a high efficiency of supernova-driven outflows. In a monolithic model, our analysis points to the need for a mechanism that drives metal-poor winds, perhaps in the form of outflows induced by supermassive black holes. Furthermore, the relative absence of star formation beyond z ~ 5 in the monolithic scenario requires a mechanism other than dwarf galaxies for reionizing the universe at z ~ 11, as required by observations of the microwave background. While the monolithic scenario is less favoured in terms of its quality of fit, it cannot yet be excluded. Comment: Expanded discussion on the role of mergers and on reionization in the monolithic scenario, refs added, main results unchanged. Matches version to appear in MNRAS.
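    The Bayesian MCMC machinery described above can be sketched in a few lines. Below is a minimal Metropolis-Hastings example in Python; the toy straight-line model, fabricated data and proposal scale are illustrative placeholders, not the paper's seven-parameter star formation model.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch of Bayesian MCMC parameter inference.
# A toy straight line stands in for the paper's star formation model.
rng = np.random.default_rng(0)

# Toy "observations": y = 2x + 1 plus Gaussian noise with known sigma.
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, x.size)

def log_likelihood(theta):
    slope, intercept = theta
    residuals = y - (slope * x + intercept)
    return -0.5 * np.sum((residuals / sigma) ** 2)

def log_prior(theta):
    # Flat prior on a broad box; -inf outside it.
    return 0.0 if np.all(np.abs(theta) < 10.0) else -np.inf

def log_posterior(theta):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

# Metropolis-Hastings: Gaussian proposal, accept with the usual ratio.
theta = np.array([0.0, 0.0])
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.05, size=2)
    logp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_new - logp:
        theta, logp = proposal, logp_new
    chain.append(theta.copy())

chain = np.array(chain[5000:])  # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior stds: ", chain.std(axis=0))
```

    In the paper's setting, log_likelihood would instead compare the star formation model's predicted observables with the data, folding in the systematics treatment described above.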

    The cosmological constant and the paradigm of adiabaticity

    We discuss the value of the cosmological constant as recovered from CMB and LSS data, and the robustness of the results when general isocurvature initial conditions are allowed for, as opposed to purely adiabatic perturbations. The Bayesian and frequentist statistical approaches are compared. It is shown that pre-WMAP CMB and LSS data tend to be incompatible with a non-zero cosmological constant, regardless of the type of initial conditions and of the statistical approach. The non-adiabatic contribution is constrained to be < 40% (2 sigma c.l.). Comment: 9 pages, 5 figures, to appear in New Astronomy Reviews, Proceedings of the 2nd CMBNET Meeting, 20-21 February 2003, Oxford, UK.
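    For readers unfamiliar with how a bound like "< 40% (2 sigma c.l.)" is read off in the Bayesian framework, the sketch below takes the limit as a percentile of the posterior for the non-adiabatic fraction; the posterior samples are fabricated for illustration. A frequentist analogue would instead locate where the profile likelihood drops by the corresponding Delta chi^2.

```python
import numpy as np

# Toy posterior samples for a non-adiabatic fraction f >= 0 (illustrative only).
rng = np.random.default_rng(1)
f_samples = np.abs(rng.normal(0.0, 0.18, 100_000))

# Bayesian 2-sigma upper limit: the 95.45th percentile of the posterior.
upper_limit = np.percentile(f_samples, 95.45)
print(f"2-sigma Bayesian upper limit on f: {upper_limit:.2f}")
```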

    Applications of Bayesian model selection to cosmological parameters

    Bayesian model selection is a tool for deciding whether the introduction of a new parameter is warranted by the data. I argue that the usual sampling-statistics significance tests for a null hypothesis can be misleading, since they do not take into account the information gained through the data when updating the prior distribution to the posterior. In contrast, Bayesian model selection offers a quantitative implementation of Occam's razor. I introduce the Savage-Dickey density ratio, a computationally quick method to determine the Bayes factor of two nested models and hence perform model selection. As an illustration, I consider three key parameters for our understanding of the cosmological concordance model. By using Wilkinson Microwave Anisotropy Probe (WMAP) 3-year data complemented by other cosmological measurements, I show that a non-scale-invariant spectral index of perturbations is favoured for any sensible choice of prior. It is also found that a flat universe is favoured with odds of 29:1 over non-flat models, and that there is strong evidence against a cold dark matter isocurvature component to the initial conditions which is totally (anti)correlated with the adiabatic mode (odds of about 2000:1), but that this is strongly dependent on the prior adopted. These results are contrasted with the analysis of WMAP 1-year data, which were not informative enough to allow a conclusion as to the status of the spectral index. In a companion paper, a new technique to forecast the Bayes factor of a future observation is presented.
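    The Savage-Dickey density ratio is simple enough to state and compute: for nested models where M0 fixes theta = theta0 and M1 lets theta vary with prior pi(theta), the Bayes factor is B01 = p(theta0 | data, M1) / pi(theta0), i.e. the posterior density at the null value divided by the prior density there. A minimal Python sketch, with fabricated posterior samples standing in for an MCMC chain:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Savage-Dickey density ratio: B01 = posterior density at theta0 divided by
# prior density at theta0, for the extended model M1 with prior N(0, 1).
rng = np.random.default_rng(2)
theta0 = 0.0

# Toy posterior samples for theta under M1 (stand-ins for a real MCMC chain).
posterior_samples = rng.normal(0.3, 0.15, 50_000)

posterior_at_null = gaussian_kde(posterior_samples)(theta0)[0]
prior_at_null = norm.pdf(theta0, loc=0.0, scale=1.0)

B01 = posterior_at_null / prior_at_null
print(f"Savage-Dickey estimate of B01: {B01:.3f}")
# B01 > 1 favours the simpler model M0; B01 < 1 favours the extra parameter.
```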

    A Global Analysis of Dark Matter Signals from 27 Dwarf Spheroidal Galaxies using 11 Years of Fermi-LAT Observations

    We search for a dark matter signal in 11 years of Fermi-LAT gamma-ray data from 27 Milky Way dwarf spheroidal galaxies with spectroscopically measured J-factors. Our analysis includes uncertainties in J-factors and background normalisations and compares results from a Bayesian and a frequentist perspective. We revisit the dwarf spheroidal galaxy Reticulum II, confirming that the purported gamma-ray excess seen in Pass 7 data is much weaker in Pass 8, independently of the statistical approach adopted. We introduce for the first time posterior predictive distributions to quantify the probability of a dark matter detection from another dwarf galaxy given a tentative excess. A global analysis including all 27 dwarfs shows no indication for a signal in nine annihilation channels. We present stringent new Bayesian and frequentist upper limits on the dark matter cross section as a function of dark matter mass. The best-fit dark matter parameters associated with the Galactic Centre excess are excluded at the 95% confidence level/posterior probability or higher in the frequentist/Bayesian framework in all cases. However, from a Bayesian model comparison perspective, dark matter annihilation within the dwarfs is not strongly disfavoured compared to a background-only model. These results constitute the highest-exposure analysis on the most complete sample of dwarfs to date. Posterior samples and likelihood maps from this study are publicly available. Comment: 27+5 pages, 10 figures. Version 2 corresponds to the Accepted Manuscript version of the JCAP article; the analysis has been updated to Pass 8 R3 data plus the 4FGL catalogue, with one more year of data and more annihilation channels. Supplementary Material (tabulated limits, likelihoods, and posteriors) is available on Zenodo at https://doi.org/10.5281/zenodo.261226
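    A posterior predictive distribution of the kind introduced above can be sketched as follows: draw the signal strength from its posterior (inferred from the tentative excess), simulate what a second dwarf would yield, and record how often a comparable excess recurs. All numbers below are fabricated for illustration and do not reproduce the paper's analysis.

```python
import numpy as np

# Posterior predictive sketch: given toy posterior samples for a signal
# strength s inferred from one dwarf, simulate the counts expected from a
# second dwarf and ask how often a comparable excess appears.
rng = np.random.default_rng(3)

background = 50.0                                   # expected background counts
s_posterior = np.abs(rng.normal(8.0, 4.0, 20_000))  # toy posterior for signal

# For each posterior draw, simulate Poisson counts in the second dwarf.
predicted_counts = rng.poisson(background + s_posterior)

threshold = 65  # counts that would register as "another excess" (illustrative)
p_detect = np.mean(predicted_counts >= threshold)
print(f"posterior predictive probability of counts >= {threshold}: {p_detect:.2f}")
```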

    Cosmic Microwave Background Anisotropies: Beyond Standard Parameters

    In the first part of this work, I review the theoretical framework of cosmological perturbation theory necessary to understand the generation and evolution of cosmic microwave background (CMB) anisotropies. Using analytical and numerical techniques, in the second part I describe the impact on the CMB power spectra of the standard cosmological parameters (such as the matter-energy budget of the Universe, its curvature, the amplitude and spectral properties of the primordial fluctuations, etc.). I introduce the most general type of initial conditions for the primordial perturbations, deriving a new analytical approximation for the neutrino isocurvature modes. In the third part, I discuss the issue of extracting constraints on the parameters of interest from the recent, high-quality CMB measurements, presenting the relevant statistical tools and focusing on the Fisher matrix analysis as a technique to produce reliable forecasts for the performance of future observations. I then apply those tools to the study of several possible extensions of the (currently) standard Lambda CDM model: the presence of extra relativistic particles, possible time variations of the fine structure constant, and the value of the primordial Helium mass fraction. I also use the CMB as a tool to study the very early Universe, via its dependence on the type of initial conditions: I relax the assumption of purely adiabatic initial conditions and discuss the observational consequences and constraints on the presence of general isocurvature modes. Comment: PhD thesis, 231 pages, 50+ low-resolution figures to comply with arXiv restrictions. Higher resolution version available from http://mpej.unige.ch/~trotta/html/thesis.ht
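    The Fisher matrix forecasting technique mentioned above admits a compact sketch: for a Gaussian likelihood with model mean mu(theta) and fixed data covariance C, F_ij = (d mu / d theta_i)^T C^{-1} (d mu / d theta_j), and the forecast 1-sigma error on theta_i is sqrt((F^{-1})_ii). In the sketch below, a toy power law stands in for a real CMB power spectrum model.

```python
import numpy as np

# Fisher matrix forecast: F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j);
# the forecast 1-sigma error on theta_i is sqrt((F^{-1})_ii).
def model(theta, x):
    amplitude, tilt = theta
    return amplitude * x ** tilt  # toy power-law "spectrum"

x = np.linspace(1.0, 10.0, 50)
theta_fid = np.array([1.0, 0.5])            # fiducial parameter values
C_inv = np.eye(x.size) / 0.05 ** 2          # diagonal covariance, sigma = 0.05

# Central-difference derivatives of the model at the fiducial point.
eps = 1e-6
derivs = []
for i in range(theta_fid.size):
    step = np.zeros_like(theta_fid)
    step[i] = eps
    derivs.append((model(theta_fid + step, x) - model(theta_fid - step, x)) / (2 * eps))
derivs = np.array(derivs)

F = derivs @ C_inv @ derivs.T
forecast_errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("forecast 1-sigma errors:", forecast_errors)
```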

    Bayesian Calibrated Significance Levels Applied to the Spectral Tilt and Hemispherical Asymmetry

    Bayesian model selection provides a formal method of determining the level of support for new parameters in a model. However, if there is not a specific enough underlying physical motivation for the new parameters, it can be hard to assign them meaningful priors, an essential ingredient of Bayesian model selection. Here we look at methods that maximize over the prior, so as to work out the maximum support the data could give to the new parameters. If the maximum support is not high enough, one can confidently conclude that the new parameters are unnecessary, without needing to worry that some other prior may make them significant. We discuss a computationally efficient means of doing this which involves mapping p-values onto upper bounds of the Bayes factor (or odds) for the new parameters. A p-value of 0.05 (1.96 sigma) corresponds to odds of at most 5:2, which is below the 'weak support at best' threshold. A p-value of 0.0003 (3.6 sigma) corresponds to odds of at most 150:1, which is the 'strong support at best' threshold. Applying this method, we find that the odds on the scalar spectral index being different from one are 49:1 at best. We also find that the odds that there is primordial hemispherical asymmetry in the cosmic microwave background are 9:1 at best. Comment: 5 pages. V2: clarifying comments added in response to referee report. Matches version to appear in MNRAS.
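    The odds quoted above are consistent with the standard p-value calibration B <= -1/(e p ln p), valid for p < 1/e (the bound of Sellke, Bayarri & Berger); the short sketch below reproduces the 5:2 and 150:1 numbers under that mapping.

```python
import numpy as np

# Upper bound on the Bayes factor for the new parameters as a function of the
# p-value: B <= -1 / (e * p * ln p), valid for 0 < p < 1/e.
def max_bayes_factor(p):
    if not 0.0 < p < 1.0 / np.e:
        raise ValueError("bound holds for 0 < p < 1/e")
    return -1.0 / (np.e * p * np.log(p))

for p in (0.05, 0.0003):
    print(f"p = {p}: odds at most {max_bayes_factor(p):.1f}:1")
# p = 0.05   -> about 2.5:1 (i.e. 5:2)
# p = 0.0003 -> about 150:1
```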

    Quantifying the tension between the Higgs mass and (g-2)_mu in the CMSSM

    Supersymmetry has often been invoked as the new physics that might reconcile the experimental muon magnetic anomaly, a_mu, with the theoretical prediction (basing the computation of the hadronic contribution on e^+ e^- data). However, in the context of the CMSSM, the required supersymmetric contributions (which grow with decreasing supersymmetric masses) are in potential tension with a possibly large Higgs mass (which requires large stop masses). In the limit of very large m_h, supersymmetry decouples and the CMSSM must show the same discrepancy with a_mu as the SM. But it is much less clear at what value of m_h the tension becomes unbearable. In this paper, we quantify this tension with the help of Bayesian techniques. We find that for m_h > 125 GeV the maximum level of discrepancy given current data (~ 3.3 sigma) is already reached. Requiring a discrepancy of less than 3 sigma implies m_h < 120 GeV. For a larger Higgs mass, we should give up either the CMSSM model or the computation of a_mu based on e^+ e^- data, or accept living with such an inconsistency.